The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
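As an illustration of the patch-based training strategy most respondents used for oversized samples, the following sketch crops random sub-volumes from a large 3D image; all names and sizes are illustrative assumptions, not taken from any particular submission.

```python
# Minimal sketch of patch-based training, the strategy most survey
# respondents used for samples too large to process at once.
# All names and sizes are illustrative, not from any specific submission.
import numpy as np

def extract_random_patch(volume, patch_size=(64, 64, 64), rng=None):
    """Crop one random patch from a large 3D volume."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices]

# Usage: a 512^3 scan rarely fits in GPU memory as a whole,
# so each training step sees only a 64^3 sub-volume.
scan = np.random.rand(512, 512, 512).astype(np.float32)
patch = extract_random_patch(scan)
print(patch.shape)  # (64, 64, 64)
```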
Ordinary Differential Equation (ODE)-based models have become popular foundation models for solving many time-series problems. Combining neural ODEs with traditional RNN models has provided the best representation for irregular time series. However, ODE-based models require the trajectory of hidden states to be defined based on the initial observed value or the last available observation. This raises the question of for how long the generated hidden state remains sufficient, and whether it is effective when long sequences are used instead of the typically used shorter ones. In this article, we introduce CrossPyramid, a novel ODE-based model that aims to enhance the generalizability of sequence representations. CrossPyramid does not rely only on the hidden state from the last observed value; it also considers ODE latent representations learned from other samples. The main idea of our proposed model is to define the hidden state for the unobserved values based on the non-linear correlation between samples. Accordingly, CrossPyramid is built from three distinctive parts: (1) an ODE auto-encoder to learn the best data representation; (2) a pyramidal attention method to categorize the learned representations (hidden states) based on the relationship characteristics between samples; and (3) a cross-level ODE-RNN to integrate the previously learned information and provide the final latent state for each sample. Through extensive experiments on partially-observed synthetic and real-world datasets, we show that the proposed architecture can effectively model the long gaps in intermittent series and outperforms state-of-the-art approaches. The results show an average improvement of 10% on univariate and multivariate datasets for both forecasting and classification tasks.
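For readers unfamiliar with the ODE-RNN building block the abstract refers to, here is a minimal sketch of the generic idea: the hidden state evolves under a learned ODE between observations and is updated by an RNN cell at each observation. This is not CrossPyramid itself; the layer sizes and the fixed-step Euler solver are assumptions.

```python
# Generic ODE-RNN sketch: between observations the hidden state follows
# a learned ODE; at each observation it is updated by a GRU cell.
# Not the authors' code; sizes and the Euler solver are assumptions.
import torch
import torch.nn as nn

class ODERNNCell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.dynamics = nn.Sequential(            # f(h) defining dh/dt
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim))
        self.update = nn.GRUCell(input_dim, hidden_dim)

    def forward(self, x, h, gap, n_steps=10):
        dt = gap / n_steps
        for _ in range(n_steps):                  # fixed-step Euler solve
            h = h + dt * self.dynamics(h)
        return self.update(x, h)                  # jump at the observation

cell = ODERNNCell(input_dim=3, hidden_dim=16)
h = torch.zeros(1, 16)
for x, gap in [(torch.randn(1, 3), 0.5), (torch.randn(1, 3), 2.0)]:
    h = cell(x, h, gap)   # irregular gaps between observations
```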
A key challenge in federated learning (FL) is the statistical heterogeneity that impairs the generalization of the global model on each client. To address this, we propose a method called Federated learning with Adaptive Local Aggregation (FedALA), which captures the desired information in the global model for the client models in personalized FL. The key component of FedALA is an Adaptive Local Aggregation (ALA) module, which adaptively aggregates the downloaded global model and the local model towards the local objective on each client to initialize the local model before training in each iteration. To evaluate the effectiveness of FedALA, we conduct extensive experiments with five benchmark datasets in the computer vision and natural language processing domains. FedALA outperforms eleven state-of-the-art baselines by up to 3.27% in test accuracy. Furthermore, we also apply the ALA module to other federated learning methods and achieve up to a 24.19% improvement in test accuracy.
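A minimal sketch of the adaptive local aggregation idea described above: rather than overwriting the local model with the downloaded global model, each client initializes its model as an element-wise blend of the two, with learnable blending weights. The clipping range and single-tensor setup are illustrative assumptions, not FedALA's exact implementation.

```python
# Element-wise blend of local and downloaded global parameters; the
# learnable weight `w` (clipped to [0, 1]) is an illustrative assumption.
import torch

def adaptive_local_aggregation(local_param, global_param, w):
    """theta_local + w * (theta_global - theta_local), element-wise."""
    return local_param + torch.clamp(w, 0.0, 1.0) * (global_param - local_param)

local_p = torch.randn(4, 4)               # client's model parameters
global_p = torch.randn(4, 4)              # downloaded global parameters
w = torch.rand(4, 4, requires_grad=True)  # trained on local data each round

init_p = adaptive_local_aggregation(local_p, global_p, w)
```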
Recently, Diffenderfer and Kailkhura proposed a new paradigm for learning compact, highly accurate binary neural networks simply by pruning and quantizing randomly weighted full-precision neural networks. However, the accuracy of these multi-prize tickets (MPTs) is highly sensitive to the optimal prune ratio, which limits their applicability. Furthermore, the original implementation did not attain any training or inference speed benefits. In this report, we discuss several improvements to overcome these limitations. We demonstrate the benefits of the proposed techniques through experiments on CIFAR-10.
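To make the prune-and-quantize recipe concrete, the sketch below keeps only the largest-magnitude weights of a randomly initialized layer and binarizes the survivors to ±α. The threshold rule and scaling constant are illustrative choices, not the exact scheme of the paper.

```python
# Hedged sketch of the prune-and-quantize recipe behind multi-prize
# tickets: drop the smallest random weights, binarize the rest to
# +/- alpha. Threshold and alpha are illustrative, not the paper's scheme.
import torch

def prune_and_binarize(weights, prune_ratio=0.5):
    k = int(weights.numel() * prune_ratio)
    threshold = weights.abs().flatten().kthvalue(k).values
    mask = (weights.abs() > threshold).float()         # prune small weights
    alpha = (weights.abs() * mask).sum() / mask.sum()  # per-tensor scale
    return alpha * torch.sign(weights) * mask          # {-alpha, 0, +alpha}

random_w = torch.randn(256, 128)    # untrained, randomly initialized layer
binary_w = prune_and_binarize(random_w, prune_ratio=0.5)
```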
In this paper, we propose a novel pipeline that leverages language foundation models for temporal sequential pattern mining, such as the human mobility forecasting task. For example, in the task of predicting Place-of-Interest (POI) customer flows, the number of visits is typically extracted from historical logs, and only the numerical data are used to predict visitor flows. In this study, we perform the forecasting task directly on natural-language input that contains various kinds of information, such as numerical values and contextual semantic information. Specific prompts are introduced to transform numerical time series into sentences, so that existing language models can be applied directly. We design an AuxMobLCast pipeline for predicting the number of visitors at each POI, integrating an auxiliary POI category classification task with an encoder-decoder architecture. This study provides empirical evidence of the effectiveness of the proposed AuxMobLCast pipeline in discovering sequential patterns in mobility forecasting tasks. The results, evaluated on three real-world datasets, demonstrate that pre-trained language foundation models also perform well in forecasting time series. This study could provide visionary insights and lead to new research directions for predicting human mobility.
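The prompting step can be illustrated with a small sketch that verbalizes a numerical visit-count series into a sentence a language model can consume; the template wording and field names are assumptions, not the exact prompts used by AuxMobLCast.

```python
# Illustrative prompt construction: a numerical visit-count series is
# verbalized into a sentence. Template and field names are assumptions.
def mobility_prompt(poi_name, dates, visits, target_date):
    history = ", ".join(
        f"{d} had {v} visitors" for d, v in zip(dates, visits))
    return (f"At {poi_name}: {history}. "
            f"How many visitors will there be on {target_date}?")

prompt = mobility_prompt(
    poi_name="Central Station",
    dates=["Mon", "Tue", "Wed"],
    visits=[120, 95, 143],
    target_date="Thu")
print(prompt)
# At Central Station: Mon had 120 visitors, Tue had 95 visitors,
# Wed had 143 visitors. How many visitors will there be on Thu?
```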
Scene text recognition has attracted increasing interest in recent years due to its wide range of applications in multilingual translation, autonomous driving, and beyond. In this report, we describe our solution to the Out-Of-Vocabulary Scene Text Understanding (OOV-ST) challenge, which aims to extract out-of-vocabulary (OOV) words from natural scene images. Our oCLIP-based model achieves an h-mean of 28.59%, ranking first on the end-to-end OOV word recognition track of the OOV challenge at the ECCV 2022 TiE workshop.
Vision transformers (ViTs) are emerging with significantly improved accuracy on computer vision tasks. However, their complex architectures and enormous computation/storage demands impose urgent needs for new hardware accelerator design methodologies. This work proposes an FPGA-aware automatic ViT acceleration framework based on the proposed mixed-scheme quantization. To the best of our knowledge, this is the first FPGA-based ViT acceleration framework to explore model quantization. Compared with state-of-the-art ViT quantization work (algorithmic approaches only, without hardware acceleration), our quantization achieves 0.47% to 1.36% higher Top-1 accuracy under the same bit-width. Compared with the 32-bit floating-point baseline FPGA accelerator, our accelerator achieves around a 5.6× improvement in frame rate (i.e., 56.8 FPS vs. 10.0 FPS) with a 0.71% accuracy drop on the ImageNet dataset for DeiT-base.
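As a rough illustration of what a mixed quantization scheme can combine, the sketch below implements uniform fixed-point quantization (which maps to multiply-accumulate logic) and power-of-two quantization (which maps to cheap bit shifts on FPGAs), alternating between them per output channel. The mixing policy and bit-widths here are assumptions, not the paper's algorithm.

```python
# Two common weight-quantization schemes a mixed-scheme framework can
# combine; the per-channel mixing policy below is an assumption.
import torch

def quantize_fixed_point(w, bits=4):
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(w / scale) * scale

def quantize_power_of_two(w, bits=4):
    sign = torch.sign(w)
    exp = torch.round(torch.log2(w.abs().clamp(min=1e-8)))
    exp = exp.clamp(min=-(2 ** (bits - 1)), max=0)   # shift amounts only
    return sign * torch.pow(2.0, exp)

w = torch.randn(8, 16)  # one weight matrix, quantized per output channel
q = torch.stack([
    quantize_fixed_point(row) if i % 2 == 0 else quantize_power_of_two(row)
    for i, row in enumerate(w)])
```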
This report presents our winning solution to the ECCV 2022 challenge on Out-Of-Vocabulary Scene Text Understanding (OOV-ST): Cropped Word Recognition. The challenge was held in the context of the ECCV 2022 workshop on Text in Everything (TiE), and it aims to extract out-of-vocabulary words from natural scene images. In the competition, we first pre-trained on synthetic datasets and then fine-tuned the model on the training set with data augmentation. Meanwhile, two additional models were trained specifically for long and vertical texts. Finally, we combined the outputs of models with different layers, different backbones, and different seeds. Our solution achieved an overall word accuracy of 69.73% when considering both in-vocabulary and out-of-vocabulary words.
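The report does not spell out the fusion rule, but one plausible reading of the ensembling step is a confidence-weighted majority vote over the word strings predicted by the individual models, as in the following sketch; the tie-breaking rule is an assumption.

```python
# Plausible fusion of per-model word predictions: majority vote over the
# predicted strings, summed confidence as tie-breaker. An assumption, not
# the report's exact rule.
from collections import defaultdict

def ensemble_words(predictions):
    """predictions: list of (word, confidence) pairs, one per model."""
    votes = defaultdict(lambda: [0, 0.0])
    for word, conf in predictions:
        votes[word][0] += 1       # vote count
        votes[word][1] += conf    # summed confidence
    return max(votes.items(), key=lambda kv: (kv[1][0], kv[1][1]))[0]

print(ensemble_words([("bakery", 0.91), ("bakcry", 0.55), ("bakery", 0.87)]))
# bakery
```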
Self-supervised learning (SSL) is a new paradigm for learning discriminative representations without labeled data, and it has reached comparable or even state-of-the-art results relative to its supervised counterparts. Contrastive learning (CL) is one of the best-known approaches in SSL and attempts to learn general, informative representations of data. CL methods have been developed mostly for computer vision and natural language processing applications that use only a single sensor modality. The majority of pervasive computing applications, however, exploit data from a range of different sensor modalities. While existing CL approaches are limited to learning from one or two data sources, we propose COCOA (Cross mOdality COntrastive leArning), a self-supervised model that employs a novel objective function to learn quality representations from multisensor data by computing the cross-correlation between different data modalities and minimizing the similarity between irrelevant instances. We evaluate the effectiveness of COCOA against eight recently introduced state-of-the-art self-supervised models and two supervised baselines across five public datasets. We show that COCOA achieves superior classification performance compared to all other approaches. Moreover, COCOA is far more label-efficient than the other baselines, including the fully supervised models, when using only one-tenth of the available labeled data.
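In the spirit of the objective described above, the following sketch shows a generic InfoNCE-style cross-modal contrastive loss: embeddings of the same instance from two sensor modalities are pulled together while other pairings in the batch are pushed apart. This is not COCOA's exact objective function; the temperature and encoder outputs are assumptions.

```python
# Generic InfoNCE-style cross-modal contrastive loss, not COCOA's exact
# objective; temperature and embedding sizes are assumptions.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(z_a, z_b, temperature=0.1):
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature       # pairwise similarities
    targets = torch.arange(z_a.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

acc_emb = torch.randn(32, 64)   # e.g., accelerometer encoder output
ecg_emb = torch.randn(32, 64)   # e.g., ECG encoder output
loss = cross_modal_contrastive_loss(acc_emb, ecg_emb)
```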
Learning user sequential behavior embeddings is sophisticated and challenging due to complex feature interactions over time and the high dimensionality of user features. Recently emerging foundation models, e.g., BERT and its variants, have encouraged a large body of researchers to investigate this field. However, unlike natural language processing (NLP) tasks, the parameters of user behavior models mostly come from the user embedding layer, which makes most existing works fail to train a universal user embedding at large scale. Furthermore, user representations are learned from multiple downstream tasks, and past research works cannot address the seesaw phenomenon. In this paper, we propose SUPERMOE, a generic framework to obtain high-quality user representations from multiple tasks. Specifically, user behavior sequences are encoded by an MoE transformer, which lets us increase the model capacity to billions of parameters, or even to trillions. To deal with the seesaw phenomenon when learning across multiple tasks, we design a new loss function with task indicators. We conduct extensive offline experiments on public datasets and on private real-world business scenarios. Our approach achieves the best performance over state-of-the-art models, and the results demonstrate the effectiveness of our framework.
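One plausible reading of a loss function with task indicators is sketched below: each sample contributes only to the loss of the task it belongs to, so gradients from one task do not drown out another's (the seesaw phenomenon). The exact SUPERMOE loss is not specified in the abstract; this is an assumption for illustration.

```python
# Multi-task loss gated by task indicators: each sample contributes only
# to its own task's loss. An illustrative assumption, not SUPERMOE's loss.
import torch
import torch.nn.functional as F

def task_indicator_loss(logits_per_task, labels, task_ids):
    """logits_per_task: list of (batch, n_classes) tensors, one per task."""
    total = torch.tensor(0.0)
    for t, logits in enumerate(logits_per_task):
        mask = task_ids == t                    # samples belonging to task t
        if mask.any():
            total = total + F.cross_entropy(logits[mask], labels[mask])
    return total

logits = [torch.randn(8, 2), torch.randn(8, 3)]   # two task heads
labels = torch.tensor([0, 1, 0, 2, 1, 0, 1, 2])
task_ids = torch.tensor([0, 0, 0, 1, 1, 0, 0, 1])
loss = task_indicator_loss(logits, labels, task_ids)
```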